Transmission Network Dynamic Planning Based on a Double Deep-Q Network With Deep ResNet

Authors

Abstract

This paper proposes a novel method for transmission network expansion planning (TNEP) based on a Double Deep-Q Network with deep ResNet (DDQN-ResNet). Since TNEP is a large-scale mixed-integer linear programming (MILP) problem, both numerical-calculation and heuristic learning-based methods suffer from heavy computational complexity in training as the optimization constraints increase. Besides, due to their black-box characteristic, the solution processes of learning-based methods are inexplicable and usually require repeated training. By using DDQN-ResNet, this paper constructs a high-performance, flexible framework to solve the large-scale, complex-constrained TNEP problem. Firstly, we form a two-objective model, in which one objective is to minimize the comprehensive cost and the other is to maximize reliability. The comprehensive cost takes into account network loss and maintenance cost, while reliability is evaluated by expected energy not served (EENS) and electrical betweenness. Secondly, the planning task is constructed based on a Markov decision process; by abstracting the task, the environment for DDQN-ResNet is obtained. In addition, an agent is established to identify the construction value of candidate lines. Finally, we perform static planning to visualize the reinforcement learning process, and dynamic planning is realized by reusing the training experience. The validity and flexibility of DDQN-ResNet are verified on the RTS 24-bus test system.
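The core of the double deep-Q update the abstract refers to is the decoupling of action selection from action evaluation: the online network picks the best next action, while the target network scores it. A minimal sketch of that target computation follows; the Q-value vectors and reward below are illustrative placeholders, not values from the paper.

```python
import numpy as np

def ddqn_target(reward, done, q_online_next, q_target_next, gamma=0.99):
    """Double DQN bootstrap target.

    The online network selects the greedy next action; the target network
    evaluates it. This decoupling reduces the overestimation bias of
    plain DQN's max-operator target.
    """
    best_action = int(np.argmax(q_online_next))   # selection: online net
    bootstrap = q_target_next[best_action]        # evaluation: target net
    return reward + (0.0 if done else gamma * bootstrap)

# Hypothetical Q-values over three candidate transmission lines (actions)
q_online_next = np.array([1.0, 2.5, 0.3])   # online net prefers action 1
q_target_next = np.array([0.8, 1.5, 2.0])   # target net's own estimates
y = ddqn_target(reward=10.0, done=False,
                q_online_next=q_online_next,
                q_target_next=q_target_next, gamma=0.9)
# y = 10.0 + 0.9 * 1.5 = 11.35
```

Note that a plain DQN target would instead take `max(q_target_next)` (here 2.0), illustrating the overestimation that the double-network form avoids.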


Similar Articles

Dynamic Frame skip Deep Q Network

Deep Reinforcement Learning methods have achieved state of the art performance in learning control policies for the games in the Atari 2600 domain. One of the important parameters in the Arcade Learning Environment (ALE, [Bellemare et al., 2013]) is the frame skip rate. It decides the granularity at which agents can control game play. A frame skip value of k allows the agent to repeat a selecte...


Variational Deep Q Network

We propose a framework that directly tackles the probability distribution of the value function parameters in Deep Q Network (DQN), with powerful variational inference subroutines to approximate the posterior of the parameters. We will establish the equivalence between our proposed surrogate objective and variational inference loss. Our new algorithm achieves efficient exploration and performs ...


Deep Attention Recurrent Q-Network

A deep learning approach to reinforcement learning led to a general learner able to train on visual input to play a variety of arcade games at human and superhuman levels. Its creators at Google DeepMind called the approach Deep Q-Network (DQN). We present an extension of DQN by “soft” and “hard” attention mechanisms. Tests of the proposed Deep Attention Recurrent Q-Network (DAR...


Implementing the Deep Q-Network

The Deep Q-Network proposed by Mnih et al. [2015] has become a benchmark and building point for much deep reinforcement learning research. However, replicating results for complex systems is often challenging since original scientific publications are not always able to describe in detail every important parameter setting and software engineering solution. In this paper, we present results from...


Anomaly-based Web Attack Detection: The Application of Deep Neural Network Seq2Seq With Attention Mechanism

Today, the use of the Internet and websites has become an integral part of people’s lives, and most activities and important data reside on websites. Thus, attempts to intrude into these websites have grown exponentially. Intrusion detection systems (IDS) for web attacks are an approach to protect users. But these systems suffer from such drawbacks as low accuracy in ...



Journal

Journal title: IEEE Access

Year: 2021

ISSN: 2169-3536

DOI: https://doi.org/10.1109/access.2021.3083266